Supplemental Material: Meta-learning from Tasks with Heterogeneous Attribute Spaces

Neural Information Processing Systems

With NP, we used deep sets for handling tasks with heterogeneous attribute spaces. DS+FT (NP+FT) denotes DS (NP) fine-tuned on each target dataset; the number of fine-tuning epochs was five. The compared methods included DS, NP, DS+FT, NP+FT, NP+MAML, and the proposed method. Results: Table 2 shows the mean squared error for each target task.


Meta-learning from Tasks with Heterogeneous Attribute Spaces

Neural Information Processing Systems

We propose a heterogeneous meta-learning method that trains a model on tasks with various attribute spaces, such that it can solve unseen tasks whose attribute spaces are different from the training tasks given a few labeled instances. Although many meta-learning methods have been proposed, they assume that all training and target tasks share the same attribute space, and they are inapplicable when attribute sizes are different across tasks. Our model infers latent representations of each attribute and each response from a few labeled instances using an inference network. Then, responses of unlabeled instances are predicted with the inferred representations using a prediction network. The attribute and response representations enable us to make predictions based on the task-specific properties of attributes and responses even when attribute and response sizes are different across tasks. In our experiments with synthetic datasets and 59 datasets in OpenML, we demonstrate that our proposed method can predict the responses given a few labeled instances in new tasks after being trained with tasks with heterogeneous attribute spaces.
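The abstract describes a two-stage design: an inference network pools labeled instances into per-attribute latent vectors, and a prediction network combines those vectors to predict responses, with set-based (deep-sets style) summation making both stages independent of the number of attributes. The sketch below is a heavily simplified, hypothetical illustration of that idea, not the authors' actual architecture: the networks are untrained toy MLPs, the response-embedding path is omitted, and all names (`mlp`, `predict`, `attr_latents`) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Tiny two-layer MLP with ReLU (toy, untrained weights).
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

H = 8  # latent size shared across all tasks
# Toy parameters; in the paper these would be meta-learned across tasks.
w1, b1 = rng.normal(size=(1, H)), np.zeros(H)
w2, b2 = rng.normal(size=(H, H)), np.zeros(H)
w_out, b_out = rng.normal(size=(H, 1)), np.zeros(1)

def predict(X, x_query):
    """Predict a response for x_query from a support set X of shape (N, D).

    Both N (labeled instances) and D (attributes) may differ across tasks,
    because every aggregation below is a sum over a set (deep-sets pooling).
    """
    N, D = X.shape
    # "Inference network": embed each scalar cell, then sum over instances
    # to get one permutation-invariant latent vector per attribute.
    cell = mlp(X.reshape(N, D, 1), w1, b1, w2, b2)          # (N, D, H)
    attr_latents = cell.sum(axis=0)                          # (D, H)
    # "Prediction network": weight latents by the query's attribute values
    # and sum over attributes, so the result no longer depends on D.
    pooled = (x_query[:, None] * attr_latents).sum(axis=0)   # (H,)
    return pooled @ w_out + b_out                            # (1,) response

# The same (fixed) networks handle tasks with different attribute sizes:
y3 = predict(rng.normal(size=(5, 3)), rng.normal(size=3))  # task with D = 3
y7 = predict(rng.normal(size=(4, 7)), rng.normal(size=7))  # task with D = 7
```

The key design point the sketch captures is that no weight matrix has a dimension tied to D or N, so a single set of meta-learned parameters can be applied to tasks whose attribute spaces differ.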



Review for NeurIPS paper: Meta-learning from Tasks with Heterogeneous Attribute Spaces

Neural Information Processing Systems

Weaknesses: (i) Missing references and comparisons: comparisons with (and citations of) CNAP [1] and self-attention based approaches [2] should be included. Self-attention is itself permutation-invariant (unless positional encodings are used). In a sense, self-attention "generalises" the summation operation, since it performs a weighted summation over value vectors; by setting all keys and queries to 1.0, you effectively recover the Deep Sets architecture. A comparison with Prototypical Networks [3] in the few-shot classification setting is also needed, given its close resemblance to the way the latent attribute vectors are calculated. Finally, I feel that showing results only on synthetic data and (relatively easy) OpenML data is not that interesting.
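The reviewer's claim that constant keys and queries collapse self-attention to set pooling can be checked numerically. With all keys and queries equal, every attention score is identical, so the softmax produces uniform weights and each output row becomes a plain (1/n-weighted) sum of the value vectors, i.e. the mean, which is a scaled deep-sets summation. The snippet below is a small verification of that argument, using toy random values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 6
V = rng.normal(size=(n, d))            # value vectors

# Self-attention with all keys and queries set to 1.0:
q = np.ones((n, d))
k = np.ones((n, d))
scores = q @ k.T / np.sqrt(d)          # constant matrix (every entry sqrt(d))
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = weights @ V                      # attention output

# Constant scores -> uniform weights 1/n -> every output row is the mean
# of the values, independent of token order (permutation-invariant).
assert np.allclose(out, V.mean(axis=0))
```

This supports the reviewer's point: with degenerate keys and queries, the weighted summation reduces to uniform set pooling, so Deep Sets can be seen as a special case of (unparameterised) self-attention.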


Review for NeurIPS paper: Meta-learning from Tasks with Heterogeneous Attribute Spaces

Neural Information Processing Systems

This paper proposes a method for few-shot learning in heterogeneous attribute spaces and shows that it performs well in a set of evaluations on synthetic tasks and OpenML datasets. Although 75% of the reviewers were initially leaning towards rejection, the rebuttal and reviewer 3 convinced the dissenting majority (in fact, all reviewers who participated in the discussion) to lean towards acceptance. The paper has a few outstanding weaknesses, which can be addressed in part by revising the writing and in part in future work, but for now it sufficiently proves the concept in a new area to warrant publication.
